Results 1 - 4 of 4
1.
J Biomed Inform; 150: 104600, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38301750

ABSTRACT

BACKGROUND: Lack of trust in artificial intelligence (AI) models in medicine is still the key obstacle to the use of AI in clinical decision support systems (CDSS). Although AI models already perform excellently in systems medicine, their black-box nature means that patient-specific decisions are incomprehensible to the physician. Explainable AI (XAI) algorithms aim to "explain" to a human domain expert which input features influenced a specific recommendation. In the clinical domain, however, these explanations must lead to some degree of causal understanding by the clinician.

RESULTS: We developed the CLARUS platform, which aims to promote human understanding of graph neural network (GNN) predictions. CLARUS enables the visualisation of patient-specific networks as well as of relevance values for genes and interactions computed by XAI methods such as GNNExplainer. This allows domain experts to gain deeper insights into the network; more importantly, the expert can interactively alter the patient-specific network based on the acquired understanding and initiate a re-prediction or retraining. This interactivity allows us to pose manual counterfactual questions and analyse the effects on the GNN prediction.

CONCLUSION: We present the first interactive XAI platform prototype, CLARUS, which allows not only the evaluation of specific human counterfactual questions, based on user-defined alterations of patient networks and a re-prediction of the clinical outcome, but also a retraining of the entire GNN after changing the underlying graph structures. The platform is currently hosted by the GWDG at https://rshiny.gwdg.de/apps/clarus/.


Subjects
Decision Support Systems, Clinical; Physicians; Humans; Artificial Intelligence; Neural Networks, Computer; Algorithms; Tolnaftate
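
As a rough illustration of the workflow described above (computing relevance values with GNNExplainer and re-predicting after a user alters the patient network), the following sketch uses PyTorch Geometric. It is not the CLARUS code base; the trained graph-level classifier "model", the "data" object, and the assumption that the model returns log-probabilities are placeholders.

```python
# Illustrative sketch only -- not the CLARUS implementation.
# Assumes: a trained graph-level classifier model(x, edge_index, batch) that
# returns log-probabilities, and a torch_geometric.data.Data object `data`
# (PyTorch Geometric >= 2.3).
import torch
from torch_geometric.explain import Explainer, GNNExplainer

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',        # explain the model's own prediction
    node_mask_type='attributes',     # relevance per node feature (e.g. gene)
    edge_mask_type='object',         # relevance per interaction (edge)
    model_config=dict(mode='multiclass_classification',
                      task_level='graph',
                      return_type='log_probs'),
)
explanation = explainer(data.x, data.edge_index, batch=data.batch)
edge_relevance = explanation.edge_mask   # one relevance value per edge

# Manual counterfactual: drop the most relevant interaction and re-predict.
keep = torch.ones(data.edge_index.size(1), dtype=torch.bool)
keep[edge_relevance.argmax()] = False
with torch.no_grad():
    original = model(data.x, data.edge_index, data.batch).argmax(dim=-1)
    altered = model(data.x, data.edge_index[:, keep], data.batch).argmax(dim=-1)
print('prediction changed:', bool((original != altered).any()))
```
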
2.
Patterns (N Y); 4(8): 100788, 2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37602217

ABSTRACT

Artificial intelligence (AI) today is very successful at standard pattern-recognition tasks thanks to the availability of large amounts of data and advances in statistical, data-driven machine learning. However, there is still a large gap between AI pattern recognition and human-level concept learning. Humans can learn remarkably well, even under uncertainty, from just a few examples, and are capable of generalizing these concepts to solve new conceptual problems. The growing interest in explainable machine intelligence requires experimental environments and diagnostic/benchmark datasets to analyze existing approaches and drive progress in pattern analysis and machine intelligence. In this paper, we provide an overview of current AI solutions for benchmarking concept learning, reasoning, and generalization; discuss the state of the art of existing diagnostic/benchmark datasets (such as CLEVR, CLEVRER, CLOSURE, CURI, Bongard-LOGO, V-PROM, RAVEN, Kandinsky Patterns, CLEVR-Humans, CLEVRER-Humans, and their extensions containing human language); and provide an outlook on some future research directions in this exciting research domain.
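
To make concrete what a diagnostic concept-learning sample can look like, here is a purely illustrative toy in the spirit of Kandinsky Patterns (not drawn from any of the datasets cited above): figures are sets of symbolically described objects, labelled by a simple ground-truth concept that a learner must infer from a handful of examples.

```python
# Toy illustration of a symbolic concept-learning sample in the spirit of
# Kandinsky Patterns (not taken from any of the cited benchmarks).
import random

SHAPES = ['circle', 'square', 'triangle']
COLORS = ['red', 'yellow', 'blue']

def sample_figure(num_objects=4):
    """A figure is a set of objects, each described symbolically."""
    return [{'shape': random.choice(SHAPES),
             'color': random.choice(COLORS),
             'x': random.random(), 'y': random.random()}
            for _ in range(num_objects)]

def concept_same_shape(figure):
    """Ground-truth concept: all objects in the figure have the same shape."""
    return len({obj['shape'] for obj in figure}) == 1

# A learner sees only a handful of labelled figures and must generalise the
# underlying concept rather than memorise surface statistics.
support_set = [(fig, concept_same_shape(fig))
               for fig in (sample_figure() for _ in range(5))]
for fig, label in support_set:
    print([o['shape'] for o in fig], '->', label)
```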

3.
Sensors (Basel); 22(8), 2022 Apr 15.
Article in English | MEDLINE | ID: mdl-35459028

ABSTRACT

The main impetus for the global efforts toward the current digital transformation in almost all areas of our daily lives is the great success of artificial intelligence (AI) and, in particular, of its workhorse, statistical machine learning (ML). The intelligent analysis, modeling, and management of agricultural and forest ecosystems, and of the use and protection of soils, already play important roles in securing our planet for future generations and will become irreplaceable. Technical solutions must encompass the entire agricultural and forestry value chain. The process of digital transformation is supported by cyber-physical systems enabled by advances in ML, the availability of big data, and increasing computing power. For certain tasks, algorithms today achieve performances that exceed human levels. The challenge is to use multimodal information fusion, i.e., to integrate data from different sources (sensor data, images, *omics), and to explain to an expert why a certain result was achieved. However, ML models often react even to small changes, and disturbances can have dramatic effects on their results. Therefore, the use of AI in areas that matter to human life (agriculture, forestry, climate, health, etc.) has led to an increased need for trustworthy AI with two main components: explainability and robustness. One step toward making AI more robust is to leverage expert knowledge. For example, a farmer or forester in the loop can often bring experience and conceptual understanding into the AI pipeline; no AI can do this. Consequently, human-centered AI (HCAI) is a combination of "artificial intelligence" and "natural intelligence" to empower, amplify, and augment human performance, rather than replace people. To achieve practical success of HCAI in agriculture and forestry, this article identifies three important frontier research areas: (1) intelligent information fusion; (2) robotics and embodied intelligence; and (3) augmentation, explanation, and verification for trusted decision support. This goal will also require an agile, human-centered design approach across three generations: G1, enabling easily realizable applications through immediate deployment of existing technology; G2, medium-term modification of existing technology; and G3, advanced adaptation and evolution beyond the state of the art.


Subjects
Artificial Intelligence; Robotics; Ecosystem; Farms; Forests; Humans
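
As a minimal sketch of the multimodal information fusion mentioned in the abstract above, the following hypothetical late-fusion classifier combines an image branch and a tabular sensor branch in PyTorch; the architecture, names, and dimensions are assumptions for illustration, not taken from the article.

```python
# Minimal late-fusion sketch for multimodal information fusion.
# Hypothetical architecture; names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, num_sensor_features=16, num_classes=3):
        super().__init__()
        # Image branch: small CNN producing a 32-dim embedding.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 32), nn.ReLU(),
        )
        # Sensor branch: MLP over tabular sensor readings.
        self.sensor_branch = nn.Sequential(
            nn.Linear(num_sensor_features, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings and classify.
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, image, sensors):
        fused = torch.cat([self.image_branch(image),
                           self.sensor_branch(sensors)], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 16))  # dummy batch
print(logits.shape)  # torch.Size([2, 3])
```
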
4.
Kunstliche Intell (Oldenbourg); 36(3-4): 271-285, 2022.
Article in English | MEDLINE | ID: mdl-36590103

ABSTRACT

Graph Neural Networks (GNNs) show good performance in relational data classification. However, their contribution to concept learning, and the validation of their output from an application domain's and a user's perspective, have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step toward powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of symbolic representations of symmetric and non-symmetric figures taken from a well-known Kandinsky Patterns data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs.
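
As a toy sketch of the general idea of running ILP on top of a GNN explainer's relevance output, the snippet below converts the most relevant edges of a symbolically described figure into Prolog-style background facts; it is illustrative only and not the validation framework introduced in the paper.

```python
# Toy sketch: turn the relevance output of a GNN explainer into Prolog-style
# facts that an ILP system could consume as background knowledge.
# Illustrative only; not the validation framework described in the article.

def relevant_facts(example_id, objects, edges, edge_relevance, top_k=2):
    """objects: {node_id: (shape, colour)}; edges: list of (src, dst) pairs;
    edge_relevance: one score per edge, e.g. a GNNExplainer edge mask."""
    facts = [f"object({example_id},o{i},{s},{c})." for i, (s, c) in objects.items()]
    ranked = sorted(range(len(edges)), key=lambda i: edge_relevance[i], reverse=True)
    for i in ranked[:top_k]:                 # keep only the most relevant relations
        src, dst = edges[i]
        facts.append(f"relevant_pair({example_id},o{src},o{dst}).")
    return facts

objects = {0: ('triangle', 'red'), 1: ('triangle', 'blue'), 2: ('square', 'red')}
edges = [(0, 1), (1, 2), (0, 2)]
print("\n".join(relevant_facts('fig1', objects, edges, [0.9, 0.2, 0.7])))
```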
